Seongyong KIM Kong-Joo LEE Key-Sun CHOI
We propose a normalization scheme for syntactic structures using a binary phrase structure grammar with composite labels. The normalization adopts binary rules so that the dependency between two sub-trees can be represented in the label of the tree. The label of a tree is composed of two attributes, one extracted from each sub-tree, so that it can represent the compositional information of the tree. The composite label is generated from part-of-speech tags using an automatic labelling algorithm. Since the proposed normalization scheme is binary and uses only part-of-speech information, it can readily be used to compare the results of different syntactic analyses independently of their syntactic descriptions, and it can be applied to other languages as well. It can also be used for syntactic analysis, where it yields higher performance than the previous syntactic description for a Korean corpus. We implement a tool that transforms a syntactic description into a normalized one based on the proposed scheme. It can help construct a unified syntactic corpus and extract syntactic information from various types of syntactic corpora in a uniform way.
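As a rough illustration of how a composite label can pair one attribute from each sub-tree of a binary rule, the sketch below combines representative part-of-speech tags; the attribute extraction and the choice of the right sub-tree as the head are assumptions for illustration, not the paper's actual labelling algorithm.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    tag: str                      # representative POS attribute of the subtree
    label: str = ""               # composite label (empty for leaves)
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def combine(left: Node, right: Node) -> Node:
    """Binary rule: the parent label pairs one attribute per sub-tree,
    so the dependency between the two sub-trees is visible in the label."""
    label = f"{left.tag}-{right.tag}"
    # assumed head choice: the right sub-tree supplies the parent's attribute
    return Node(tag=right.tag, label=label, left=left, right=right)

# toy binary tree over Korean POS tags (Sejong-style tags as an example)
vp = combine(Node("MAG"), Node("VV"))   # label "MAG-VV"
s = combine(Node("NNG"), vp)            # label "NNG-VV"
print(s.label)                          # NNG-VV
```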
Conventional shape from focus (SFF) methods are inaccurate because of the piecewise constant approximation of the focused image surface (FIS). We propose a more accurate scheme for SFF based on representing the three-dimensional FIS in terms of neural network weights. The neural networks are trained to learn the shape of the FIS that maximizes the focus measure.
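For context, a minimal sketch of the piecewise-constant baseline the abstract contrasts against: per pixel, choose the frame of the focus stack that maximizes a focus measure (a modified Laplacian here, an assumed choice). The paper's method replaces this argmax surface with an FIS encoded in trained neural network weights.

```python
import numpy as np

def modified_laplacian(img: np.ndarray) -> np.ndarray:
    """Simple per-pixel focus measure based on second differences."""
    mlx = np.abs(2 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0))
    mly = np.abs(2 * img - np.roll(img, 1, 1) - np.roll(img, -1, 1))
    return mlx + mly

def depth_from_focus(stack: np.ndarray) -> np.ndarray:
    """stack: (n_frames, H, W) images taken at known lens steps.
    Returns the frame index maximizing focus per pixel, i.e. a
    piecewise-constant FIS estimate."""
    focus = np.stack([modified_laplacian(f) for f in stack])
    return focus.argmax(axis=0)

rng = np.random.default_rng(0)
stack = rng.normal(0, 1, (10, 32, 32))
print(depth_from_focus(stack).shape)   # (32, 32) depth-index map
```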
Bong Dae CHOI Dong Bi ZHU Chang Sun CHOI
We propose and analyze a new efficient handoff scheme, called the Splitted-Rating Channel Scheme, for UMTS networks; we analyze the call-level performance of the splitted-rating channel scheme and then the packet-level performance of downlink traffic in UMTS circuit-switched networks. To reduce the blocking probability of originating calls and the forced termination probability of handoff calls, the splitted-rating channel scheme is applied to multimedia UMTS networks. The multimedia network supports two classes of calls: narrowband calls requiring one channel and wideband calls requiring multiple channels. A wideband call in service splits its channels and lends them to originating and handoff calls according to a threshold control policy. Assuming that arrivals of narrowband calls and arrivals of wideband calls are Poisson, we model the numbers of narrowband and wideband calls in a cell as a level-dependent quasi-birth-death (QBD) process and obtain their joint stationary distribution. For the packet-level analysis, we first describe the downlink traffic from the base station to a mobile terminal in UMTS networks, and we calculate the mean packet delay of a connected wideband call using QBD analysis. Numerical examples show that our splitted-rating channel scheme reduces the blocking probability of originating calls and the forced termination probability of handoff calls with only a slight degradation in packet delay.
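A toy building block for this kind of analysis, not the paper's two-class QBD model itself: the sketch below computes the stationary distribution of a small continuous-time Markov chain by solving πQ = 0 with normalization, here for a hypothetical single-class cell with Poisson arrivals and C channels.

```python
import numpy as np

def stationary(Q: np.ndarray) -> np.ndarray:
    """Stationary distribution of a finite CTMC: pi Q = 0, sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])       # balance equations + normalization
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

lam, mu, C = 3.0, 1.0, 4                   # assumed arrival/service rates, channels
Q = np.zeros((C + 1, C + 1))
for k in range(C + 1):
    if k < C:
        Q[k, k + 1] = lam                  # call arrival occupies a channel
    if k > 0:
        Q[k, k - 1] = k * mu               # each busy channel releases at rate mu
    Q[k, k] = -Q[k].sum()

pi = stationary(Q)
print("blocking probability:", pi[-1])     # all channels busy (Erlang-B here)
```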
Asifullah KHAN Syed Fahad TAHIR Tae-Sun CHOI
We present a novel approach to developing machine learning (ML) based decoding models for extracting a watermark in the presence of attacks. The statistical characterization of the components of various frequency bands is exploited to allow blind extraction of the watermark. Experimental results show that the proposed ML-based decoding scheme can adapt to suit the watermark application by learning the alterations in the feature space incurred by the attack employed.
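A minimal sketch of the general idea, with synthetic data and an assumed feature set (per-band mean and variance) and classifier: a decision tree learns to decode a watermark bit from frequency-band statistics after an attack-like distortion.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def band_features(coeffs: np.ndarray) -> np.ndarray:
    """Statistics of low/mid/high frequency bands of a flattened block."""
    bands = np.array_split(coeffs, 3)
    return np.array([s for b in bands for s in (b.mean(), b.var())])

# synthetic training data: the bit shifts the mid-band mean; noise = attack
X, y = [], []
for bit in (0, 1):
    for _ in range(200):
        c = rng.normal(0, 1, 48)
        c[16:32] += 0.8 if bit else -0.8       # embedded watermark bit
        c += rng.normal(0, 0.5, 48)            # distortion from an attack
        X.append(band_features(c))
        y.append(bit)

clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
print("train accuracy:", clf.score(X, y))
```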
Entity descriptions have been growing exponentially in community-generated knowledge bases, such as DBpedia. However, many of these descriptions are not useful for identifying the underlying characteristics of their corresponding entities, because the descriptions include semantically redundant facts, i.e., triples that represent connections between entities without any semantic properties. Entity summarization is applied to filter out such non-informative and semantically redundant triples and to rank the remaining informative facts within a given summary size. This study proposes an entity summarization approach based on pre-grouping the entities that share a set of attributes that can be used to characterize the entities we want to summarize. Entities are first grouped according to multilingual categories projected into a single entity space, which provide multi-angled semantics for each entity. Key facts about an entity are then determined through in-group-based rankings. As a result, our proposed approach produced summaries of significantly better quality (p-value = 1.52×10⁻³ and 2.01×10⁻³ for the top-10 and top-5 summaries, respectively) than the state-of-the-art method, which requires additional external resources.
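The sketch below illustrates one plausible form of in-group-based ranking, with toy data and an assumed scoring rule (predicates shared by fewer entities in the group rank higher); the paper's actual ranking formula may differ.

```python
from collections import Counter

# a pre-formed group of comparable entities with their (predicate, object) triples
group = {
    "Berlin":  [("capitalOf", "Germany"), ("country", "Germany"), ("timezone", "CET")],
    "Munich":  [("country", "Germany"), ("timezone", "CET")],
    "Hamburg": [("country", "Germany"), ("timezone", "CET"), ("portCity", "true")],
}

pred_freq = Counter(p for ts in group.values() for p, _ in ts)
n = len(group)

def summarize(entity: str, k: int = 2):
    """Rank an entity's triples by in-group rarity of the predicate:
    facts shared by fewer group members are more characteristic."""
    ranked = sorted(group[entity], key=lambda t: pred_freq[t[0]] / n)
    return ranked[:k]

print(summarize("Berlin"))   # capitalOf ranks above the shared predicates
```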
Myung-Seok CHOI Kong-Joo LEE Key-Sun CHOI Gil Chang KIM
It is not always possible to find a global parse for an input sentence, owing to problems such as errors in the sentence and incompleteness of the lexicon and grammar. Partial parsing is an alternative approach for responding to these problems. Partial parsing techniques try to recover syntactic information efficiently and reliably by sacrificing completeness and depth of analysis. One of the difficulties in partial parsing is how to extract the grammar automatically. In this paper we present a method for automatically extracting partial parsing rules from a tree-annotated corpus using the decision tree method. Our goal is deterministic global parsing using partial parsing rules; in other words, we aim to extract partial parsing rules with higher accuracy and broader coverage. First, we define a rule template that enables a subtree to be learned for a given substring, so that the resulting rules are more specific and stricter in their application. Second, rule candidates extracted from a training corpus are enriched with contextual and lexical information using the decision tree method and verified through cross-validation. Last, we underspecify non-deterministic rules by merging substructures with ambiguity in those rules. The learned grammar is similar to a phrase structure grammar with contextual and lexical information, but it allows building structures of depth one or more. Thanks to automatic learning, the partial parsing rules can be consistent and domain-independent. Partial parsing with this grammar processes an input sentence deterministically using longest-match heuristics and applies rules to the input sentence recursively. The experiments showed that the partial parser using automatically extracted rules is not only accurate and efficient but also achieves reasonable coverage for Korean.
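A minimal sketch of deterministic partial parsing with longest-match heuristics: rules map part-of-speech substrings to a labeled subtree and are applied recursively until no rule fires. The rules and tags below are toy examples, not learned ones.

```python
rules = {
    ("DT", "NN"):       "NP",
    ("JJ", "NN"):       "NP",
    ("DT", "JJ", "NN"): "NP",   # the longer match wins over (JJ, NN)
    ("IN", "NP"):       "PP",
}

def parse(tags):
    """Repeatedly replace the longest matching tag substring by a subtree
    (label, children) until no rule applies."""
    tags = list(tags)
    changed = True
    while changed:
        changed = False
        for i in range(len(tags)):
            for width in sorted({len(r) for r in rules}, reverse=True):
                if i + width > len(tags):
                    continue
                key = tuple(t if isinstance(t, str) else t[0]
                            for t in tags[i:i + width])
                if key in rules:
                    tags[i:i + width] = [(rules[key], tags[i:i + width])]
                    changed = True
                    break
            if changed:
                break
    return tags

print(parse(["IN", "DT", "JJ", "NN"]))
# [('PP', ['IN', ('NP', ['DT', 'JJ', 'NN'])])]
```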
In this paper, we present two fast motion estimation techniques with an adaptive variable search range, using the spatial and temporal correlation of moving pictures, respectively. The first technique uses the frame difference between two adjacent frames as the criterion for deciding the search window size. The second uses the deviation between the past and the predicted current-frame motion vectors as the criterion. Simulation results show that these methods reduce the number of checking points while keeping almost the same image quality as that of the full search method.
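A sketch of the first technique under assumed thresholds: the mean absolute frame difference decides the search window size, so nearly static scenes get a small window and fast motion falls back to a wide search.

```python
import numpy as np

def search_range(prev: np.ndarray, curr: np.ndarray) -> int:
    """Pick a search window size from the inter-frame difference.
    The thresholds mapping difference to window size are illustrative."""
    mad = np.mean(np.abs(curr.astype(int) - prev.astype(int)))
    if mad < 2:
        return 4     # nearly static scene: small window suffices
    if mad < 8:
        return 8
    return 16        # large change: fall back to a wide search

rng = np.random.default_rng(1)
prev = rng.integers(0, 256, (64, 64))
curr = np.clip(prev + rng.integers(-3, 4, (64, 64)), 0, 255)
print("search range:", search_range(prev, curr))   # small for a small change
```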
Vitaly KOBER Josue ALVAREZ-BORREGO Tae Sun CHOI
The Karhunen-Loève (KL) transform is optimal for many signal detection, communication, and filtering applications. We propose an explicit solution of the KL integral equation for the practical case in which the covariance function of a stationary process is exponentially oscillating.
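As a numerical counterpart of the integral equation, the sketch below discretizes an exponentially oscillating covariance, K(t, s) = e^{-a|t-s|} cos(b(t-s)), on a grid and obtains the KL basis from its eigendecomposition; the parameters and grid are illustrative.

```python
import numpy as np

a, b, n = 1.0, 5.0, 256
t = np.linspace(0.0, 1.0, n)
d = t[:, None] - t[None, :]
K = np.exp(-a * np.abs(d)) * np.cos(b * d)   # exponentially oscillating covariance

# eigenvectors of the discretized kernel approximate the KL basis functions
eigvals, eigvecs = np.linalg.eigh(K)
order = np.argsort(eigvals)[::-1]
print("top 5 KL eigenvalues:", eigvals[order[:5]] / n)
```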
Aamir Saeed MALIK Tae-Sun CHOI
A classification method is presented for differentiating honeycombed High Resolution Computed Tomographic (HRCT) images from normal HRCT images. For successful classification of honeycombed HRCT images, a complete set of methods and algorithms is described, from segmentation and feature extraction to feature selection and classification. Wavelet energy is selected as the feature for classification using K-means clustering. Test data from 20 patients are used to validate the method.
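A sketch of the feature-and-clustering stage under stated assumptions (Daubechies-4 wavelet, two decomposition levels, synthetic images standing in for HRCT data): wavelet subband energies form the feature vector, and K-means separates the two texture types.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def wavelet_energy(img: np.ndarray) -> np.ndarray:
    """Energy of each subband of a 2-level 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(img, "db4", level=2)
    feats = [np.sum(coeffs[0] ** 2)]
    for detail in coeffs[1:]:
        feats += [np.sum(c ** 2) for c in detail]
    return np.log1p(np.array(feats))

rng = np.random.default_rng(2)
smooth = [rng.normal(0, 1, (64, 64)) for _ in range(10)]        # "normal"-like
pattern = 3 * np.sin(np.arange(64) / 2.0)                       # periodic texture
textured = [rng.normal(0, 1, (64, 64)) + pattern for _ in range(10)]

X = np.array([wavelet_energy(im) for im in smooth + textured])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(labels)   # the two texture types should fall into two clusters
```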
Conventional spatial-transform-based motion estimation algorithms are not practical because of their heavy computational load. In this paper, we propose a motion estimation method with a variable grid size, which is more efficient than conventional spatial-transform-based methods and gives better PSNR performance than the conventional block matching algorithm (BMA).
Young-Hee KIM Jong-Ki NAM Young-Soo SOHN Hong-June PARK Ki-Bong KU Jae-Kyung WEE Joo-Sun CHOI Choon-Sung PARK
A fully on-chip current-controlled open-drain output driver using a bandgap reference current generator was designed for high-bandwidth DRAMs. It removes the overhead of receiving a digital code from an external source to compensate for temperature and supply voltage variations. The correct value of the current control register is updated at the end of every auto refresh cycle. Operation at data rates up to 0.8 Gb/s was verified by SPICE simulation using a 0.22 µm triple-well CMOS technology.
To reduce the amount of computation of the full search algorithm for fast motion estimation, we propose a new fast matching algorithm without any degradation of the predicted images. The computational reduction without degradation comes from an adaptive matching scan algorithm based on the image complexity of the reference block in the current frame. Experimentally, we significantly reduce the computational load compared with the conventional full search algorithm.
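One standard lossless speed-up of this flavor, sketched below: accumulate the SAD row by row and abandon a candidate block as soon as the partial sum exceeds the best SAD so far. The fixed row order is a simplification; the paper adapts the scan order to the complexity of the reference block.

```python
import numpy as np

def sad_with_early_exit(ref: np.ndarray, cand: np.ndarray, best: float) -> float:
    """Row-wise SAD that stops once the partial sum cannot beat `best`.
    Early termination never changes the winning match (lossless)."""
    total = 0.0
    for r in range(ref.shape[0]):
        total += np.sum(np.abs(ref[r].astype(int) - cand[r].astype(int)))
        if total >= best:
            return np.inf
    return total

def full_search(ref, frame, cx, cy, srange=8):
    best, best_mv = np.inf, (0, 0)
    h, w = ref.shape
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = cy + dy, cx + dx
            if 0 <= y and 0 <= x and y + h <= frame.shape[0] and x + w <= frame.shape[1]:
                cost = sad_with_early_exit(ref, frame[y:y + h, x:x + w], best)
                if cost < best:
                    best, best_mv = cost, (dx, dy)
    return best_mv, best

rng = np.random.default_rng(4)
frame = rng.integers(0, 256, (64, 64))
ref = frame[20:28, 30:38]                 # true match sits at (cx, cy) = (30, 20)
print(full_search(ref, frame, 30, 20))    # expected ((0, 0), 0.0)
```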
We propose a new method for Depth from Defocus (DFD) using the wavelet transform. Most existing DFD methods use inverse filtering in a transform domain to determine the measure of defocus. These methods suffer from inaccuracies in the frequency domain representation due to windowing and border effects. The proposed method uses wavelets, which allow both local analysis and windowing with variable-sized regions for images with varying textural properties. Experimental results show that the proposed method gives more accurate depth maps than previous methods.
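A minimal sketch of a wavelet-based defocus cue, with an assumed wavelet (Haar) and synthetic data: the energy of the detail subbands drops as blur increases, so comparing it across images taken with different camera settings indicates relative defocus.

```python
import numpy as np
import pywt

def detail_energy(img: np.ndarray) -> float:
    """Total energy of the detail subbands of a single-level 2-D DWT."""
    _, (ch, cv, cd) = pywt.dwt2(img, "haar")
    return float(np.sum(ch ** 2) + np.sum(cv ** 2) + np.sum(cd ** 2))

rng = np.random.default_rng(3)
sharp = rng.normal(0, 1, (64, 64))
blurred = 0.25 * (sharp + np.roll(sharp, 1, 0)          # crude 2x2 box blur
                  + np.roll(sharp, 1, 1) + np.roll(sharp, (1, 1), (0, 1)))
print(detail_energy(sharp) > detail_energy(blurred))    # True: blur removes detail
```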
Sung-Sun CHOI Han-Yeol YU Yong-Hoon KIM
This paper presents a current-reused quadrature voltage-controlled oscillator (QVCO) that adopts a source-connection coupling structure. The QVCO simultaneously achieves low phase noise and low power consumption by combining current-reused VCOs with coupling transistors in a new way. The measured QVCO achieves a good FoM of -188.2 dBc/Hz at a frequency of 2.2 GHz with 3.96 mW power consumption.
This paper presents a novel wavelet compression technique to increase the compression of images. Based on the zerotree entropy coding method, the technique initially uses only two symbols (significant and zerotree) to compress the image data at each level. In addition, a sign bit is used for newly significant coefficients to indicate whether they are positive or negative. In contrast to the isolated zero symbols used in conventional zerotree algorithms, the proposed algorithm changes isolated zeros into significant coefficients and saves their locations; they are then treated just like other significant coefficients. This is done to decrease the number of symbols and hence the number of bits needed to represent them. Finally, the algorithm encodes the coordinates of the isolated zeros so that their values can be changed back to the originals during reconstruction. A noticeably high compression ratio is achieved for most images without changing image quality.
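A toy significance pass in the spirit of this two-symbols-plus-sign alphabet; the flat scan below ignores the parent-child zerotree structure and the isolated-zero relabeling, and only illustrates how coefficients are emitted as significant (with a sign bit) or zero.

```python
import numpy as np

def significance_pass(coeffs: np.ndarray, threshold: float):
    """Emit one symbol per coefficient against a threshold:
    'S+'/'S-' for significant coefficients (sign bit), 'Z' otherwise."""
    symbols = []
    for c in coeffs.ravel():
        if abs(c) >= threshold:
            symbols.append("S+" if c >= 0 else "S-")
        else:
            symbols.append("Z")
    return symbols

c = np.array([[34, -3], [5, -41]])
print(significance_pass(c, 32))    # ['S+', 'Z', 'Z', 'S-']
```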
Insoo KIM Jincheol YOO JongSoo KIM Kyusun CHOI
The Threshold Inverter Quantization (TIQ) technique has been gaining importance in high-speed flash A/D converters because of its fast data conversion speed. It eliminates the need for resistor ladders to generate reference voltages, which consume substantial power. The key to TIQ comparator design is to generate 2^n - 1 differently sized TIQ comparators for an n-bit A/D converter. This paper presents a highly efficient TIQ comparator design methodology based on an analytical model as well as a SPICE-simulation experimental model. Any set of TIQ comparators can be found efficiently using the proposed method. A 6-bit TIQ A/D converter was designed in a 0.18 µm standard CMOS technology using the proposed method and compared with previously measured results to verify the methodology.
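The comparator count and target threshold voltages follow directly from the resolution; the sketch below enumerates the 2^n - 1 evenly spaced thresholds for an assumed input range (in a TIQ design each threshold would be realized by transistor sizing rather than a resistor ladder).

```python
def tiq_thresholds(n_bits: int, v_min: float, v_max: float):
    """Target switching thresholds for an n-bit flash ADC:
    2**n - 1 comparators, evenly spaced over the input range."""
    n_comp = 2 ** n_bits - 1
    step = (v_max - v_min) / (n_comp + 1)
    return [v_min + (i + 1) * step for i in range(n_comp)]

# 6-bit converter with an assumed 0.4-1.4 V input range: 63 comparators
levels = tiq_thresholds(6, 0.4, 1.4)
print(len(levels), round(levels[0], 4), round(levels[-1], 4))
```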